Thoughts on "The Nature and Development of Operations Research" by Kittel (1947), Chris Ryan
2. "The Nature and Development of Operations Research", Charles Kittel, Science, Vol. 105, No. 2719 (Feb. 7, 1947), pp. 150-153 [link]
My personal exploration:
Charles Kittel’s 1947 article, The Nature and Development of Operations Research, is one of the earliest attempts to codify what had emerged during the war years into something resembling a new scientific discipline. Appearing in Science, the flagship organ of the American scientific establishment, the paper is as significant for its placement as for its content. That a physicist and Guggenheim Fellow like Kittel should use such a platform to explain this wartime innovation underscores both the prestige of operations research (OR) at that moment and the authority that “science” carried in shaping policy and administration in the mid-20th century.
Kittel begins with a crisp definition: “Operations research is a scientific method for providing executive departments with a quantitative basis for decisions”. This definition was highly influential and oft-repeated in the early days of OR, though its origins are unclear: some attribute it to Kimball and Morse, others to Kittel in this article.
Kittel’s definition captures the ambition of OR to be simultaneously scientific, quantitative, and managerial. Kittel provides several examples, all drawn from wartime projects: U-boat hunting, large convoys, bombing campaigns. These examples emphasize that OR was not simply about creating new weapons (such as the atomic bomb, another project driven by former research scientists), but about optimizing the use of existing forces through analysis of past operations.
In making this distinction, Kittel insists: “Operations research is thus distinguished from laboratory research for military purposes, which is concerned with the continual improvement of the weapons of warfare. Furthermore, the elapsed times between the inception of a new proposal and its realization in large-scale combat are radically different for laboratory and operations research”.
The “operations” in OR meant research not in the laboratory but in the field, with feedback cycles measured in days (or even hours) rather than months or years. This emphasis helped me think about how the word "operations" was meant to modify the concept of "research". Different types of research were underway during World War II: what we might now call "research and development", closer to what Kittel calls "laboratory research", and "field" or "operations" research.
This introduces a theme I will return to again and again in this annotated bibliography: how the concept of "Operations Research" is defined in contrast to the related fields and areas from which authors at the time were concerned to distinguish their work. In the history of science, this is called "boundary work", as best exemplified in the book Cultural Boundaries of Science by Thomas Gieryn.
Kittel wanted the audience of Science to view Operations Research as a type of science, but one distinguished from the other forms of science active during the war effort. The concept of "operations" research, as opposed to "laboratory" research, draws that boundary clearly and finds space for this new "science".
Why Scientists?
One of the recurring questions of the early post-war era (and a recurring theme in this annotated bibliography) was why trained scientists were needed for operations research work that often appeared to be very simple. Kittel’s answer is twofold: scientists were trained to reject unsupported claims and to seek quantitative bases for judgment, and they were adept at stripping problems down to fundamentals. This reflects the broader cultural conviction, described by Theodore Porter in his book Trust in Numbers, that quantitative reasoning conferred objectivity. A scientist, in Kittel’s framing, brought not just technical competence but a mentality: suspicion of anecdote, comfort with abstraction, a drive toward generalization, and a determination to drive out subjective "intuition".
Interestingly, he also observes that the OR “mentality” appeared most often in physicists and biologists rather than in mathematicians. This is telling. Circa mid-century, mathematics was leaning toward abstraction: the Hilbert program, the influence of Bourbaki, the tendency to value formal structure over empirical engagement. The historian of mathematics Leo Corry, among others, has documented this cultural turn in mathematics. By contrast, physicists and biologists were habituated to messy data and experimental feedback, making them more naturally suited to the empirical-statistical blend that OR demanded.
Ratios, Effectiveness, and the Law of Large Numbers
Kittel’s illustrative examples all center on what he calls “exchange rates” or “effectiveness ratios”: ships sunk per torpedoes fired, bombs on target per aircraft lost, merchant vessels saved per escort built. At one level, the mathematics here is basic division. But there is an underlying statistical sophistication: an appeal to the law of large numbers, to the idea that stochastic regularities emerge in the aggregate. Blackett himself noted the “remarkable constancy” of these ratios across theaters and contexts, even when individual battles varied wildly.
This faith in regularity was of its time. As Ian Hacking argues in The Taming of Chance, the early- to mid-20th century saw a widespread conviction that probability could domesticate uncertainty, that averages revealed underlying laws. OR rode this wave. Its promise was greatest for large operations, where idiosyncrasies washed out. Kittel makes this point directly when turning to peacetime applications: “These enterprises, because of their national character and size, lend themselves more suitably to the application of operations research than do small local enterprises”. For small businesses or limited engagements, the ratios might wobble, the idiosyncrasies refuse to cancel, and OR would look less reliable.
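To make the statistical intuition concrete, here is a minimal simulation sketch (my own illustration, not from Kittel; it assumes each engagement is an independent Bernoulli trial with a hypothetical fixed success rate of 0.25) showing how an observed effectiveness ratio wobbles across small campaigns but stabilizes across large ones:

```python
import random

def simulate_exchange_ratio(n_engagements, p_success, seed=0):
    """Simulate n_engagements independent engagements, each 'succeeding'
    (say, one ship sunk per torpedo fired) with probability p_success,
    and return the observed effectiveness ratio: successes / engagements."""
    rng = random.Random(seed)
    successes = sum(rng.random() < p_success for _ in range(n_engagements))
    return successes / n_engagements

# Hypothetical per-engagement effectiveness, fixed at 0.25 for illustration.
TRUE_RATE = 0.25

for n in (10, 100, 10_000):
    # Five independent "campaigns" of the same size.
    ratios = [simulate_exchange_ratio(n, TRUE_RATE, seed=s) for s in range(5)]
    print(f"n={n:>6}: " + ", ".join(f"{r:.3f}" for r in ratios))
```

At n = 10 the observed ratios scatter widely; at n = 10,000 they cluster tightly around 0.25. The "remarkable constancy" Blackett noted only emerges in the aggregate, which is exactly why OR's promise was greatest for large operations and why Kittel steers its peacetime applications toward enterprises of national size.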
The Problem of Data
Still, it would be a mistake to think this reliance on simple ratios made OR trivial. Behind each number lay arduous processes of data collection, interpretation, and verification. How did one know how many ships were sunk? Reconnaissance had to be flown, observations confirmed, photographs interpreted, claims cross-checked. Each effectiveness ratio presupposed an information system robust enough to produce stable numbers at scale. In this sense, OR was inseparable from the development of modern systems of command, control, communication, and intelligence (C³I). The centralized nature of global war—large convoys, coordinated bombing raids, worldwide submarine campaigns—made such information flows possible and necessary. Earlier wars, fought through local skirmishes, simply lacked the infrastructure for OR to thrive.
Open Questions
Reading Kittel today raises several open questions. How much of OR’s early authority came from its association with “science” rather than from any demonstrable improvement in outcomes? And does the reliance on large-scale statistical regularities limit OR’s applicability to smaller or more decentralized contexts, a tension still visible in contemporary debates about applying analytics to small businesses or community organizations?